林嶔 (Lin, Chin)
Lesson 1
Data science is a discipline for exploring the unknown: we take existing data and build up the relationships hidden inside it. So how do we actually make predictions?
Mathematically, suppose our input is an object \(x\) and our output is an object \(y\). We can then construct a "prediction function" \(f()\) to predict \(y\) from \(x\):
\[\hat{y} = f(x)\]
– In simple linear regression, for example, the prediction function is a straight line:
\[\hat{y} = f(x) = b_{0} + b_{1}x\]
– To judge how good the predictions are, we define a loss function that measures the difference between \(y\) and \(\hat{y}\):
\[loss = diff(y, \hat{y})\]
– Taking the loss function of simple linear regression as an example, the quantity being measured is the residual sum of squares, so the expression can be rewritten as:
\[loss = diff(y, \hat{y}) = \sum \limits_{i=1}^{n} \left(y_{i} - \hat{y_{i}}\right)^{2}\]
– Substituting the prediction function \(f(x)\) for \(\hat{y}\):
\[loss = diff(y, f(x))\]
\[loss = diff(y, f(x)) = \sum \limits_{i=1}^{n} \left(y_{i} - \left(b_{0} + b_{1}x_{i}\right)\right)^{2}\]
– The goal of training is therefore to find the values of \(b_{0}\) and \(b_{1}\) that minimize the loss:
\[min(loss)\]
– Let us start with a simpler problem. Consider the function:
\[f(x) = x^{2} + 2x + 1\]
– Next, we differentiate this function and look for the point where the derivative equals 0; that point is where the extremum of the function lies:
\[\frac{\partial}{\partial x} f(x) = 2x + 2 = 0\]
\[x = -1\]
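– As a quick check in R (a minimal sketch: D() differentiates the expression symbolically, and optimize() searches numerically over an interval chosen here only for illustration):
# Symbolic derivative and numeric minimizer for f(x) = x^2 + 2x + 1
f = function(x) {
  return(x^2 + 2*x + 1)
}
D(expression(x^2 + 2*x + 1), 'x')    # gives 2 * x + 2
optimize(f, interval = c(-10, 10))   # minimum near x = -1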
Why can we use differentiation to find a function's extrema? You may need to review the basic idea: the derivative obtained by differentiating a function is simply the function giving the slope of its tangent line, and a point where this tangent-slope function equals 0 means the function has stopped rising or falling there, so that point is naturally where an extremum lies.
However, there is a very annoying step in the extremum-finding process above: we had to solve a linear equation in one unknown. Once the function gets a bit more complicated, we would have to solve a system of simultaneous equations of degree M in N unknowns, which makes the whole process extremely messy, so we need another approach.
Here we formally introduce "gradient descent". First, what is a "gradient"? A "gradient" is essentially a "slope" (note that this definition is not precise, but to avoid a lot of heavy mathematical language we will use it for now). Under this definition, "gradient descent" means that, while searching for an extremum, we repeatedly move in the direction opposite to the gradient, that is, downhill, and in this way work our way toward the minimum.
Below we use the function \(f(x)\) above as an example. We first pick an arbitrary starting value and call it epoch 0:
\[x_{\left(epoch:0\right)} = 10\]
\[x_{\left(epoch:t\right)} = x_{\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:t - 1\right)})\]
– Since the derivative of this function is \(2x + 2\), we can substitute it into the update rule, where \(lr\) is the learning rate (set to 0.05 here):
\[ \begin{align} x_{\left(epoch:1\right)} & = x_{\left(epoch:0\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:0\right)}) \\ & = 10 - lr \cdot \frac{\partial}{\partial x}f(10) \\ & = 10 - 0.05 \cdot (2\cdot10+2)\\ & = 8.9 \end{align} \]
\[ \begin{align} x_{\left(epoch:2\right)} & = x_{\left(epoch:1\right)} - lr \cdot \frac{\partial}{\partial x}f(x_{\left(epoch:1\right)}) \\ & = 8.9 - lr \cdot \frac{\partial}{\partial x}f(8.9) \\ & = 8.9 - 0.05 \cdot (2\cdot8.9+2)\\ & = 7.91 \end{align} \]
\[ \begin{align} x_{\left(epoch:3\right)} & = 7.91 - 0.891 = 7.019 \\ x_{\left(epoch:4\right)} & = 7.019 - 0.8019 = 6.2171 \\ x_{\left(epoch:5\right)} & = 6.2171 - 0.72171 = 5.49539 \\ & \dots \\ x_{\left(epoch:\infty\right)} & = -1 \end{align} \]
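– A minimal R sketch that reproduces the epoch values above (start value 10, \(lr = 0.05\), derivative \(2x + 2\)):
x.epoch = 10
lr = 0.05
for (t in 1:5) {
  # one gradient descent update per epoch
  x.epoch = x.epoch - lr * (2 * x.epoch + 2)
  cat('epoch', t, ':', x.epoch, '\n')   # 8.9, 7.91, 7.019, 6.2171, 5.49539
}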
– Now let us implement gradient descent in R, this time using the function below as an example:
\[f(x) = x^{2}\]
# Objective function and its derivative
original.fun = function(x) {
  return(x^2)
}
differential.fun = function(x) {
  return(2*x)
}

# Gradient descent settings
start.value = 5
learning.rate = 0.1
num.iteration = 1000

# Iteratively update x, keeping the whole trajectory
result.x = rep(NA, num.iteration)
for (i in 1:num.iteration) {
  if (i == 1) {
    result.x[1] = start.value
  } else {
    result.x[i] = result.x[i-1] - learning.rate * differential.fun(result.x[i-1])
  }
}
print(tail(result.x, 1))
[1] 7.68895e-97
– When using gradient descent, learning.rate should in principle not be set too large; it is better to watch the convergence speed and adjust it only if convergence turns out to be too slow.
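– A small experiment illustrating this point (a sketch reusing \(f(x) = x^{2}\); the helper run.gd and the learning-rate values below are chosen only for illustration):
# Run n gradient descent steps on f(x) = x^2 (derivative 2x) and return the final x
run.gd = function(lr, start = 5, n = 50) {
  x = start
  for (i in 1:n) {
    x = x - lr * (2 * x)
  }
  return(x)
}
run.gd(lr = 0.01)   # about 1.82: too small, still far from the minimum after 50 steps
run.gd(lr = 0.90)   # about 7e-05: converges, though it overshoots back and forth
run.gd(lr = 1.10)   # grows without bound: every step overshoots further than the last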
– Gradient descent generalizes directly to functions of more than one variable. Consider the two-variable function:
\[f(a, b) = 4a^{2}- 4a + b^{2} - 2b + 5\]
– Completing the square shows that its minimum lies at \(a = 0.5\), \(b = 1\):
\[ \begin{align} f(a, b) & = (4a^{2} - 4a + 1) + (b^{2} - 2b + 1) + 3 \\ & = (2a - 1)^{2} + (b - 1)^{2} + 3 \end{align} \]
– The partial derivatives with respect to \(a\) and \(b\) are:
\[ \begin{align} \frac{\partial}{\partial a} f(a, b) & = 8a - 4 \\ \frac{\partial}{\partial b} f(a, b) & = 2b - 2 \end{align} \]
\[ \begin{align} a_{\left(epoch:t\right)} & = a_{\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial a}f(a_{\left(epoch:t - 1\right)}, b_{\left(epoch:t - 1\right)}) \\ b_{\left(epoch:t\right)} & = b_{\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial b}f(a_{\left(epoch:t - 1\right)}, b_{\left(epoch:t - 1\right)}) \end{align} \]
\[ \begin{align} a_{\left(epoch:0\right)} & = 0 \\ b_{\left(epoch:0\right)} & = 0 \\\\ a_{\left(epoch:1\right)} & = a_{\left(epoch:0\right)} - lr \cdot \frac{\partial}{\partial a}f(a_{\left(epoch:0\right)}, b_{\left(epoch:0\right)}) \\ & = 0 - lr \cdot \frac{\partial}{\partial a}f(0, 0) \\ & = 0 - 0.1 \cdot (8\cdot0-4)\\ & = 0.4 \\ b_{\left(epoch:1\right)} & = b_{\left(epoch:0\right)} - lr \cdot \frac{\partial}{\partial b}f(a_{\left(epoch:0\right)}, b_{\left(epoch:0\right)}) \\ & = 0 - lr \cdot \frac{\partial}{\partial b}f(0, 0) \\ & = 0 - 0.1 \cdot (2\cdot0-2)\\ & = 0.2 \\ \end{align} \]
\[ \begin{align} a_{\left(epoch:2\right)} & = 0.4 + 0.08 = 0.48\\ b_{\left(epoch:2\right)} & = 0.2 + 0.16 = 0.36\\\\ a_{\left(epoch:3\right)} & = 0.48 + 0.016 = 0.496\\ b_{\left(epoch:3\right)} & = 0.36 + 0.128 = 0.488\\\\ a_{\left(epoch:4\right)} & = 0.496 + 0.0032 = 0.4992\\ b_{\left(epoch:4\right)} & = 0.488 + 0.1024 = 0.5904\\\\ & \dots \\\\ a_{\left(epoch:\infty\right)} & = 0.5\\ b_{\left(epoch:\infty\right)} & = 1 \end{align} \]
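– A minimal R sketch of the two-variable case, following the worked example above (\(lr = 0.1\), both parameters start at 0):
lr = 0.1
a = 0
b = 0
for (i in 1:1000) {
  grad.a = 8 * a - 4    # partial derivative with respect to a
  grad.b = 2 * b - 2    # partial derivative with respect to b
  a = a - lr * grad.a   # both updates use the previous epoch's values
  b = b - lr * grad.b
}
print(c(a = a, b = b))  # approaches a = 0.5, b = 1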
– Now let us use gradient descent to fit a simple linear regression. Prediction function:
\[\hat{y_{i}} = f(x) = b_{0} + b_{1}x_{i}\]
– Loss function:
\[loss = diff(y, \hat{y}) = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \left(y_{i} - \hat{y_{i}}\right)^{2}\]
– Combining the prediction function with the loss function:
\[loss = diff(y, f(x)) = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \left(y_{i} - \left(b_{0} + b_{1}x_{i}\right)\right)^{2}\]
– You may need to review the chain rule:
\[\frac{\partial}{\partial x}h(x) = \frac{\partial}{\partial x}f(g(x)) = \frac{\partial}{\partial g(x)}f(g(x)) \cdot\frac{\partial}{\partial x}g(x)\]
– Partial derivatives with respect to \(b_0\) and \(b_1\):
\[ \begin{align} \frac{\partial}{\partial b_0}loss & = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \frac{\partial}{\partial \hat{y_{i}}} \left(y_{i} - \hat{y_{i}}\right)^{2} \cdot \frac{\partial}{\partial b_0} \hat{y_{i}}\\ & = \frac{1}{n} \sum \limits_{i=1}^{n}\left( \hat{y_{i}} - y_{i} \right) \cdot \frac{\partial}{\partial b_0} (b_{0} + b_{1}x_{i}) \\ & = \frac{1}{n} \sum \limits_{i=1}^{n}\left( \hat{y_{i}} - y_{i} \right) \\\\ \frac{\partial}{\partial b_1}loss & = \frac{1}{n} \sum \limits_{i=1}^{n}\left( \hat{y_{i}} - y_{i} \right) \cdot x_{i} \end{align} \]
\[ \begin{align} b_{0\left(epoch:t\right)} & = b_{0\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial b_{0}}loss \\ b_{1\left(epoch:t\right)} & = b_{1\left(epoch:t - 1\right)} - lr \cdot \frac{\partial}{\partial b_{1}}loss \end{align} \]
# Data for a simple linear regression example
x = c(1, 2, 3, 4, 5)
y = c(6, 7, 9, 8, 10)

# Loss function: half the mean squared error
original.fun = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum((y.hat - y)^2)/2/length(x))
}

# Partial derivative of the loss with respect to b0
differential.fun.b0 = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum(y.hat - y)/length(x))
}

# Partial derivative of the loss with respect to b1
differential.fun.b1 = function(b0, b1, x, y) {
  y.hat = b0 + b1 * x
  return(sum((y.hat - y)*x)/length(x))
}
model = lm(y~x)
print(model)
##
## Call:
## lm(formula = y ~ x)
##
## Coefficients:
## (Intercept) x
## 5.3 0.9
lr = 0.1
num.iteration = 1000

# Start from b0 = b1 = 0 and record the estimates at every epoch;
# both gradients are evaluated at the previous epoch's values
ans_b0 = rep(0, num.iteration)
ans_b1 = rep(0, num.iteration)
for (i in 2:num.iteration) {
  ans_b0[i] = ans_b0[i-1] - lr * differential.fun.b0(b0 = ans_b0[i-1], b1 = ans_b1[i-1], x = x, y = y)
  ans_b1[i] = ans_b1[i-1] - lr * differential.fun.b1(b0 = ans_b0[i-1], b1 = ans_b1[i-1], x = x, y = y)
}
print(tail(ans_b0, 1))
[1] 5.3
print(tail(ans_b1, 1))
[1] 0.9000001
– The prediction equation of logistic regression:
\[lp_i= log(\frac{{p_i}}{1-p_i}) = b_{0} + b_{1}x_i\]
– Here \(p_i\) is the predicted probability that sample \(i\) is positive. Let us first rewrite the equation in its standard form:
\[p_i = \frac{{1}}{1+e^{-lp_i}} = \frac{{1}}{1+e^{-b_{0} - b_{1}x_i}}\]
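– A tiny illustration in R (the lp values are arbitrary): the sigmoid transform maps any linear predictor into the interval (0, 1):
lp = c(-5, 0, 5)
1 / (1 + exp(-lp))   # approximately 0.0067 0.5000 0.9933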
– As before, we first try the squared-error loss:
\[loss = diff(y, p)\]
\[loss = diff(y, p) = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \left(y_{i} - p_{i}\right)^{2}\]
\[loss = \frac{{1}}{2n} \sum \limits_{i=1}^{n} \left(y_{i} - \frac{{1}}{1+e^{-b_{0} - b_{1}x_{i}}}\right)^{2}\]
– This derivation is a bit involved, so let us list a few differentiation rules to help:
\[\frac{\partial}{\partial x}h(x) = \frac{\partial}{\partial x}f(g(x)) = \frac{\partial}{\partial g(x)}f(g(x)) \cdot\frac{\partial}{\partial x}g(x)\]
\[\frac{\partial}{\partial x}\frac{{f(x)}}{g(x)} = \frac{{g(x) \cdot \frac{\partial}{\partial x} f(x)} - {f(x) \cdot \frac{\partial}{\partial x} g(x)}}{g(x)^2}\]
\[\frac{\partial}{\partial x} e^x = e^x\]
– Using these rules, we can differentiate the sigmoid function \(S(x) = \frac{1}{1+e^{-x}}\):
\[ \begin{align} \frac{\partial}{\partial x}S(x) & = \frac{\partial}{\partial x}\frac{{1}}{1+e^{-x}} \\ & = \frac{\partial}{\partial (1+e^{-x})}\frac{{1}}{1+e^{-x}} \cdot \frac{\partial}{\partial x}(1+e^{-x}) \\ & = \frac{-1}{(1+e^{-x})^2} \cdot (-e^{-x}) \\ & = \frac{e^{-x}}{(1+e^{-x})^2} \\ & = \frac{1}{1+e^{-x}} \cdot (1 - \frac{1}{1+e^{-x}}) \\ & = S(x)(1-S(x)) \end{align} \]
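– A quick numerical check of this identity in R (a sketch; the test points are arbitrary):
S = function(x) {
  return(1 / (1 + exp(-x)))
}
x.test = c(-2, 0, 1.5)
numeric.grad = (S(x.test + 1e-6) - S(x.test - 1e-6)) / (2e-6)   # central finite difference
analytic.grad = S(x.test) * (1 - S(x.test))                     # S(x) * (1 - S(x))
round(cbind(numeric.grad, analytic.grad), 6)                    # the two columns agree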
– Partial derivative with respect to \(b_0\):
\[ \begin{align} \frac{\partial}{\partial b_{0}} loss & = \frac{\partial}{\partial p} diff(y, p) \cdot \frac{\partial}{\partial b_{0}} p \\ & = \frac{{1}}{2n}\sum \limits_{i=1}^{n} \frac{\partial}{\partial p_i} \left(y_{i} - p_{i}\right)^{2} \cdot \frac{\partial}{\partial lp_i} \frac{{1}}{1+e^{-lp_i}} \cdot \frac{\partial}{\partial b_{0}} lp_i \\ & = \frac{{1}}{n}\sum \limits_{i=1}^{n} \left(p_{i} - y_{i} \right) \cdot p_{i} \cdot (1 - p_{i}) \cdot \frac{\partial}{\partial b_{0}} (b_{0} + b_{1}x_i) \\ & = \frac{{1}}{n} \sum \limits_{i=1}^{n} \left(p_{i} - y_{i}\right) \cdot p_{i} \cdot (1 - p_{i}) \end{align} \]
– Partial derivative with respect to \(b_1\) (derivation omitted):
\[ \begin{align} \frac{\partial}{\partial b_{1}} loss & = \frac{{1}}{n}\sum \limits_{i=1}^{n} \left(p_{i} - y_{i} \right) \cdot p_{i} \cdot (1 - p_{i}) \cdot \frac{\partial}{\partial b_{1}} (b_{0} + b_{1}x_i) \\ & = \frac{{1}}{n} \sum \limits_{i=1}^{n} \left(p_{i} - y_{i}\right) \cdot p_{i} \cdot (1 - p_{i}) \cdot x_i \end{align} \]
# Data for a logistic regression example
x = 1:10
y = c(0, 0, 1, 0, 1, 0, 1, 0, 1, 1)

# Prediction function: the sigmoid of the linear predictor
pred.fun = function(b0, b1, x) {
  p = 1 / (1 + exp(- b0 - b1 * x))
  return(p)
}

# Squared-error loss
loss.fun = function(b0, b1, x, y) {
  p = pred.fun(b0 = b0, b1 = b1, x = x)
  loss = 1/(2*length(x)) * sum((y - p)^2)
  return(loss)
}

# Partial derivatives of the loss with respect to b0 and b1
differential.fun.b0 = function(b0, b1, x, y) {
  p = pred.fun(b0 = b0, b1 = b1, x = x)
  return(-sum((y - p)*p*(1-p))/length(x))
}
differential.fun.b1 = function(b0, b1, x, y) {
  p = pred.fun(b0 = b0, b1 = b1, x = x)
  return(-sum((y - p)*p*(1-p)*x)/length(x))
}
model = glm(y~x, family = 'binomial')
print(model)
##
## Call: glm(formula = y ~ x, family = "binomial")
##
## Coefficients:
## (Intercept) x
## -1.9957 0.3629
##
## Degrees of Freedom: 9 Total (i.e. Null); 8 Residual
## Null Deviance: 13.86
## Residual Deviance: 11.67 AIC: 15.67
num.iteration = 1000
lr = 0.1

# Start from b0 = b1 = 0 and record the estimates at every epoch
ans_b0 = rep(0, num.iteration)
ans_b1 = rep(0, num.iteration)
for (i in 2:num.iteration) {
  ans_b0[i] = ans_b0[i-1] - lr * differential.fun.b0(b0 = ans_b0[i-1], b1 = ans_b1[i-1], x = x, y = y)
  ans_b1[i] = ans_b1[i-1] - lr * differential.fun.b1(b0 = ans_b0[i-1], b1 = ans_b1[i-1], x = x, y = y)
}
print(tail(ans_b0, 1))
[1] -1.201455
print(tail(ans_b1, 1))
[1] 0.2376567
– Notice that after 1,000 epochs the estimates still differ noticeably from the glm() solution. Look again at the partial derivative with respect to \(b_0\):
\[ \begin{align} \frac{\partial}{\partial b_{0}} loss & = \frac{{1}}{n} \sum \limits_{i=1}^{n} \left(p_{i} - y_{i}\right) \cdot p_{i} \cdot (1 - p_{i}) \end{align} \]
– and with respect to \(b_1\):
\[ \begin{align} \frac{\partial}{\partial b_{1}} loss & = \frac{{1}}{n} \sum \limits_{i=1}^{n} \left(p_{i} - y_{i}\right) \cdot p_{i} \cdot (1 - p_{i}) \cdot x_i \end{align} \]
– Both gradients carry the factor \(p_{i}(1 - p_{i})\), which is close to 0 whenever \(p_{i}\) approaches 0 or 1, so the updates become tiny and learning is slow. A better-behaved loss function for classification is the cross-entropy:
\[loss = CE(y, p) = \frac{{1}}{n}\sum \limits_{i=1}^{n} -\left(y_{i} \cdot log(p_{i}) + (1-y_{i}) \cdot log(1-p_{i})\right)\]
This loss has the property that, when \(y=1\), it pushes \(p\) as close to 1 as possible, and when \(y=0\), it pushes \(p\) as close to 0 as possible, which is exactly what we want.
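– A quick look at the numbers (a sketch; the probabilities are arbitrary) shows that the cross-entropy is small for correct, confident predictions and grows rapidly for confident mistakes:
ce = function(y, p) {
  return(-(y * log(p) + (1 - y) * log(1 - p)))
}
ce(y = 1, p = c(0.9, 0.5, 0.1))   # approximately 0.105 0.693 2.303
ce(y = 0, p = c(0.9, 0.5, 0.1))   # approximately 2.303 0.693 0.105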
Now let us combine the cross-entropy with logistic regression and recompute the partial derivatives with respect to \(b_0\) and \(b_1\):
\[\frac{\partial}{\partial x} log(x) = \frac{{1}}{x}\]
\[ \begin{align} \frac{\partial}{\partial p}CE(y, p) & = - ( \frac{\partial}{\partial p}y \cdot log(p) + \frac{\partial}{\partial p}(1-y) \cdot log(1-p) ) \\ & = - (\frac{y}{p} - \frac{1-y}{1-p} )\\ & = \frac{p-y}{p(1-p)} \end{align} \]
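– A numerical check of this derivative in R (a sketch; y.i and p.i are arbitrary test values):
CE.single = function(y, p) {
  return(-(y * log(p) + (1 - y) * log(1 - p)))
}
y.i = 1
p.i = 0.3
(CE.single(y.i, p.i + 1e-6) - CE.single(y.i, p.i - 1e-6)) / (2e-6)   # numeric derivative
(p.i - y.i) / (p.i * (1 - p.i))                                      # analytic value, about -3.3333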
– Partial derivative with respect to \(b_0\):
\[ \begin{align} \frac{\partial}{\partial b_{0}} loss & = \frac{\partial}{\partial p} CE(y, p) \cdot \frac{\partial}{\partial b_{0}} p \\ & = \frac{1}{n}\sum \limits_{i=1}^{n}\frac{\partial}{\partial p_i}CE(y_i, p_i) \cdot \frac{\partial}{\partial lp_i} \frac{{1}}{1+e^{-lp_i}} \cdot \frac{\partial}{\partial b_{0}} lp_i \\ & = \frac{1}{n}\sum \limits_{i=1}^{n}\frac{p_i - y_i}{p_i(1-p_i)} \cdot p_i(1-p_i) \cdot \frac{\partial}{\partial b_{0}} (b_{0} + b_{1}x_i) \\ & = \frac{1}{n}\sum \limits_{i=1}^{n} p_i - y_i \end{align} \]
– Partial derivative with respect to \(b_1\) (derivation omitted):
\[ \begin{align} \frac{\partial}{\partial b_{1}} loss & = \frac{1}{n}\sum \limits_{i=1}^{n}\frac{p_i - y_i}{p_i(1-p_i)} \cdot p_i(1-p_i) \cdot \frac{\partial}{\partial b_{1}} (b_{0} + b_{1}x_i) \\ & = \frac{1}{n}\sum \limits_{i=1}^{n} (p_i - y_i)x_i \end{align} \]
# Same data as before
x = 1:10
y = c(0, 0, 1, 0, 1, 0, 1, 0, 1, 1)

# Prediction function: the sigmoid of the linear predictor
pred.fun = function(b0, b1, x) {
  p = 1 / (1 + exp(- b0 - b1 * x))
  return(p)
}

# Cross-entropy loss
loss.fun = function(b0, b1, x, y) {
  p = pred.fun(b0 = b0, b1 = b1, x = x)
  loss = -1/length(x) * sum(y * log(p) + (1 - y) * log(1 - p))
  return(loss)
}

# Partial derivatives of the cross-entropy loss with respect to b0 and b1
differential.fun.b0 = function(b0, b1, x, y) {
  p = pred.fun(b0 = b0, b1 = b1, x = x)
  return(-sum(y - p)/length(x))
}
differential.fun.b1 = function(b0, b1, x, y) {
  p = pred.fun(b0 = b0, b1 = b1, x = x)
  return(-sum((y - p)*x)/length(x))
}
model = glm(y~x, family = 'binomial')
print(model)
##
## Call: glm(formula = y ~ x, family = "binomial")
##
## Coefficients:
## (Intercept) x
## -1.9957 0.3629
##
## Degrees of Freedom: 9 Total (i.e. Null); 8 Residual
## Null Deviance: 13.86
## Residual Deviance: 11.67 AIC: 15.67
num.iteration = 1000
lr = 0.1

# Start from b0 = b1 = 0 and record the estimates at every epoch
ans_b0 = rep(0, num.iteration)
ans_b1 = rep(0, num.iteration)
for (i in 2:num.iteration) {
  ans_b0[i] = ans_b0[i-1] - lr * differential.fun.b0(b0 = ans_b0[i-1], b1 = ans_b1[i-1], x = x, y = y)
  ans_b1[i] = ans_b1[i-1] - lr * differential.fun.b1(b0 = ans_b0[i-1], b1 = ans_b1[i-1], x = x, y = y)
}
print(tail(ans_b0, 1))
[1] -1.952537
print(tail(ans_b1, 1))
[1] 0.3564134
– The probability mass function of the Bernoulli distribution is:
\[ Pr(p,y) = p^y(1-p)^{1-y} \]
– where, in logistic regression, the probability \(p\) is given by:
\[p = \frac{{1}}{1+e^{-b_{0} - b_{1}x}}\]
– Maximum-likelihood estimation chooses the parameters that maximize the likelihood of the whole sample:
\[max(\prod \limits_{i=1}^{n}Pr(p_i,y_i))\]
– Taking the logarithm does not change where the maximum occurs:
\[max(log(\prod \limits_{i=1}^{n}Pr(p_i,y_i))) = max(\sum \limits_{i=1}^{n}\left(y_{i} \cdot log(p_{i}) + (1-y_{i}) \cdot log(1-p_{i})\right))\]
– Maximizing this log-likelihood is therefore equivalent to minimizing the cross-entropy loss:
\[min(CE(y, p)) = min(-\frac{{1}}{n}\sum \limits_{i=1}^{n} \left(y_{i} \cdot log(p_{i}) + (1-y_{i}) \cdot log(1-p_{i})\right))\]
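– As a sketch of this equivalence in R (reusing model, x and y from the glm() example above), the cross-entropy evaluated at the glm() estimates equals minus the log-likelihood divided by \(n\):
p.hat = predict(model, type = 'response')           # fitted probabilities at the MLE
-mean(y * log(p.hat) + (1 - y) * log(1 - p.hat))    # cross-entropy loss
-as.numeric(logLik(model)) / length(y)              # the same value from logLik()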
– The differentiation tools above will come up again and again in the next few lessons, so please make sure you are comfortable with them.
– Before moving on to the next unit, think about this: what are the shortcomings of linear regression and logistic regression, and how might they be improved?